6 research outputs found

    A Generalized Notion of Time for Modeling Temporal Networks

    Get PDF
    Most approaches for modeling and analyzing temporal networks do not explicitly discuss the underlying notion of time. In this paper, we therefore introduce a generalized notion of time for temporal networks. Our approach also allows us to consider non-deterministic time and incomplete data, two issues that are often found when analyzing datasets extracted from, for example, online social networks. To demonstrate the consequences of our generalized notion of time, we also discuss the implications for the computation of (shortest) temporal paths in temporal networks.
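
    To make the notion of a (shortest) temporal path concrete, the sketch below computes earliest-arrival temporal paths over a list of time-stamped edges. The edge format, the zero traversal duration, and the function name are illustrative assumptions; the paper's generalized, possibly non-deterministic notion of time is not modeled here.

```python
# Illustrative sketch only: earliest-arrival "temporal paths" over time-stamped
# edges (u, v, t). The paper's generalized, possibly non-deterministic notion of
# time is NOT modeled here; this is the classical deterministic special case.
from math import inf

def earliest_arrival(edges, source):
    """Earliest arrival time at every node when each edge (u, v, t) can be
    traversed only at its timestamp t (zero traversal duration assumed)."""
    arrival = {source: -inf}          # the source is reachable "from the beginning"
    for u, v, t in sorted(edges, key=lambda e: e[2]):   # scan edges in time order
        if arrival.get(u, inf) <= t and t < arrival.get(v, inf):
            arrival[v] = t            # we can wait at u until time t, then hop to v
    return arrival

if __name__ == "__main__":
    edges = [("a", "b", 1), ("b", "c", 3), ("a", "c", 5), ("c", "d", 2)]
    print(earliest_arrival(edges, "a"))   # {'a': -inf, 'b': 1, 'c': 3}
    # note: ("c", "d", 2) is unusable because c is only reached at time 3
```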

    A comprehensive Survey of the Actual Causality Literature

    No full text
    The study of causality has recently gained traction in computer science. Formally capturing causal reasoning would allow computers to answer “Why”-questions and would lead to significant advances in fields such as verification, machine learning, explainability, legal reasoning, and algorithmic fairness. To accomplish this, one needs to be able to infer type-causal relationships, i.e., general statements about causal dependencies, from data, and then use those relationships to identify the actual causes of an event in a given situation; such causes are referred to as token causes. To the best of our knowledge, no comprehensive survey exists that reviews the state of the art of formal systems for token causality. The present thesis addresses this deficit. The literature review that we have performed operates on three levels of granularity. The first considers the literature landscape itself as an object of study, employing network-analysis techniques to identify important publications, authors, and research communities. The second is a classical literature review, in which a subset of the collected literature is investigated in detail to extract, describe, and categorise the tools used for formalising causation. This includes the languages for encoding causal relationships, the various definitions that try to capture token causality, and the benchmarks used to test the capabilities of those definitions. In the third part we describe and compare the four main token-causality definitions with respect to the most prominent benchmarks in the literature. This last part also required some original work, as not all the examples are found formalised in the literature.
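
    As a minimal illustration of a token-causal query, the hypothetical sketch below tests the simplest candidate definition, counterfactual ("but-for") dependence, on a tiny structural model written as a Python function. The model, the variable names, and the helper is_but_for_cause are assumptions made for illustration; the definitions surveyed in the thesis (e.g. Halpern-Pearl-style ones) are considerably more refined.

```python
# Minimal sketch: a "but-for" (counterfactual dependence) test on a tiny
# structural model. Real actual-causality definitions (e.g. Halpern-Pearl)
# are more involved; this only illustrates the flavour of token-causal queries.

def forest_fire(lightning, match):
    # disjunctive scenario: either cause alone suffices to start the fire
    return lightning or match

def is_but_for_cause(model, context, variable):
    """`variable` is a but-for cause of the model's outcome in `context` iff
    flipping that variable (holding everything else fixed) flips the outcome."""
    actual = model(**context)
    counterfactual = model(**dict(context, **{variable: not context[variable]}))
    return actual != counterfactual

if __name__ == "__main__":
    ctx = {"lightning": True, "match": True}                 # over-determination case
    print(is_but_for_cause(forest_fire, ctx, "lightning"))   # False: the fire persists
    print(is_but_for_cause(forest_fire, ctx, "match"))       # False as well
    # but-for fails on over-determined events, which is one reason the literature
    # studies stronger definitions of actual causation
```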

    LNCS

    No full text
    As AI and machine-learned software are used increasingly for making decisions that affect humans, it is imperative that they remain fair and unbiased in their decisions. To complement design-time bias mitigation measures, runtime verification techniques have been introduced recently to monitor the algorithmic fairness of deployed systems. Previous monitoring techniques assume full observability of the states of the (unknown) monitored system. Moreover, they can monitor only fairness properties that are specified as arithmetic expressions over the probabilities of different events. In this work, we extend fairness monitoring to systems modeled as partially observed Markov chains (POMCs), and to specifications containing arithmetic expressions over the expected values of numerical functions on event sequences. The only assumptions we make are that the underlying POMC is aperiodic and starts in the stationary distribution, with a bound on its mixing time being known. These assumptions enable us to estimate a given property for the entire distribution of possible executions of the monitored POMC, by observing only a single execution. Our monitors observe a long run of the system and, after each new observation, output updated PAC-estimates of how fair or biased the system is. The monitors are computationally lightweight and, using a prototype implementation, we demonstrate their effectiveness on several real-world examples.
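
    The flavour of such a PAC estimate can be sketched as follows, assuming for simplicity that observations are independent and the monitored quantity is a bounded per-event function; the paper's monitors additionally correct for the dependence between successive observations using the known bound on the POMC's mixing time. All names below are illustrative.

```python
# Illustrative sketch only: a running PAC-style estimate of the expected value
# of a bounded per-event function, with a Hoeffding confidence interval.
# The dependence between successive observations of a POMC is ignored here.
import math

class RunningPACEstimate:
    def __init__(self, low, high, delta=0.05):
        self.low, self.high, self.delta = low, high, delta  # range of the function, error prob.
        self.n, self.total = 0, 0.0

    def update(self, value):
        self.n += 1
        self.total += value
        mean = self.total / self.n
        # Hoeffding: |mean - true expectation| <= eps with prob. >= 1 - delta (i.i.d. case)
        eps = (self.high - self.low) * math.sqrt(math.log(2 / self.delta) / (2 * self.n))
        return mean, eps

if __name__ == "__main__":
    import random
    mon = RunningPACEstimate(low=0.0, high=1.0)
    for _ in range(10_000):
        estimate, radius = mon.update(random.random() < 0.3)  # e.g. "loan granted" events
    print(f"estimate = {estimate:.3f} +/- {radius:.3f}")
```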

    LNCS

    No full text
    Machine-learned systems are in widespread use for making decisions about humans, and it is important that they are fair, i.e., not biased against individuals based on sensitive attributes. We present runtime verification of algorithmic fairness for systems whose models are unknown, but are assumed to have a Markov chain structure. We introduce a specification language that can model many common algorithmic fairness properties, such as demographic parity, equal opportunity, and social burden. We build monitors that observe a long sequence of events as generated by a given system, and output, after each observation, a quantitative estimate of how fair or biased the system was on that run until that point in time. The estimate is proven to be correct modulo a variable error bound and a given confidence level, where the error bound gets tighter as the observed sequence gets longer. Our monitors are of two types, and use, respectively, frequentist and Bayesian statistical inference techniques. While the frequentist monitors compute estimates that are objectively correct with respect to the ground truth, the Bayesian monitors compute estimates that are correct subject to a given prior belief about the system’s model. Using a prototype implementation, we show how we can monitor if a bank is fair in giving loans to applicants from different social backgrounds, and if a college is fair in admitting students while maintaining a reasonable financial burden on the society. Although they exhibit different theoretical complexities in certain cases, in our experiments, both frequentist and Bayesian monitors took less than a millisecond to update their verdicts after each observation.
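
    A rough, hypothetical sketch of the Bayesian flavour of such a monitor is given below: demographic parity is estimated as the difference between two acceptance probabilities, each tracked with a Beta posterior that is updated after every observed decision. The event encoding and the simple Beta-Bernoulli model are assumptions for illustration; the actual monitors support richer specifications and Markovian dynamics.

```python
# Hypothetical sketch of a Bayesian-style fairness monitor: the demographic-parity
# gap is estimated as the difference between two acceptance probabilities, each
# with a Beta posterior that is updated after every observed decision.

class BetaBernoulli:
    def __init__(self, alpha=1.0, beta=1.0):   # uniform prior
        self.alpha, self.beta = alpha, beta

    def update(self, accepted):
        if accepted:
            self.alpha += 1
        else:
            self.beta += 1

    def mean(self):
        return self.alpha / (self.alpha + self.beta)

class DemographicParityMonitor:
    """Tracks P(accept | group A) - P(accept | group B) from a decision stream."""
    def __init__(self):
        self.posteriors = {"A": BetaBernoulli(), "B": BetaBernoulli()}

    def observe(self, group, accepted):
        self.posteriors[group].update(accepted)
        return self.posteriors["A"].mean() - self.posteriors["B"].mean()

if __name__ == "__main__":
    import random
    monitor = DemographicParityMonitor()
    for _ in range(5_000):
        group = random.choice("AB")
        accepted = random.random() < (0.6 if group == "A" else 0.5)  # a biased bank
        gap = monitor.observe(group, accepted)
    print(f"estimated demographic-parity gap: {gap:.3f}")   # roughly 0.1
```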

    Runtime monitoring of dynamic fairness properties

    No full text
    A machine-learned system that is fair in static decision-making tasks may have biased societal impacts in the long run. This may happen when the system interacts with humans and feedback patterns emerge, reinforcing old biases in the system and creating new biases. While existing works try to identify and mitigate long-run biases through smart system design, we introduce techniques for monitoring fairness in real time. Our goal is to build and deploy a monitor that will continuously observe a long sequence of events generated by the system in the wild, and will output, with each event, a verdict on how fair the system is at the current point in time. The advantages of monitoring are two-fold. Firstly, fairness is evaluated at run-time, which is important because unfair behaviors may not be eliminated a priori, at design-time, due to partial knowledge about the system and the environment, as well as uncertainties and dynamic changes in the system and the environment, such as the unpredictability of human behavior. Secondly, monitors are by design oblivious to how the monitored system is constructed, which makes them suitable to be used as trusted third-party fairness watchdogs. They function as computationally lightweight statistical estimators, and their correctness proofs rely on the rigorous analysis of the stochastic process that models the assumptions about the underlying dynamics of the system. We show, both in theory and experiments, how monitors can warn us (1) if a bank’s credit policy over time has created an unfair distribution of credit scores among the population, and (2) if a resource allocator’s allocation policy over time has made unfair allocations. Our experiments demonstrate that the monitors introduce very low overhead. We believe that runtime monitoring is an important and mathematically rigorous new addition to the fairness toolbox.
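
    As a toy illustration of such a watchdog, the sketch below observes a stream of credit-score updates and reports, after each event, the running gap between the average scores of two demographic groups. The event format and the plain running mean are illustrative assumptions and do not reproduce the paper's statistical machinery.

```python
# Toy sketch of a "dynamic fairness" watchdog: it watches a stream of
# credit-score updates and reports, after each event, the running gap between
# the average scores of two demographic groups.
from collections import defaultdict

class ScoreGapMonitor:
    def __init__(self):
        self.count = defaultdict(int)
        self.total = defaultdict(float)

    def observe(self, group, new_score):
        self.count[group] += 1
        self.total[group] += new_score
        if self.count["A"] and self.count["B"]:
            return self.total["A"] / self.count["A"] - self.total["B"] / self.count["B"]
        return None   # not enough data yet to compare the groups

if __name__ == "__main__":
    monitor = ScoreGapMonitor()
    events = [("A", 700), ("B", 640), ("A", 720), ("B", 650), ("A", 690)]
    for group, score in events:
        gap = monitor.observe(group, score)
    print(f"average-score gap after {len(events)} events: {gap:.1f}")  # about 58.3
```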

    Into the unknown: active monitoring of neural networks (extended version)

    No full text
    Neural-network classifiers achieve high accuracy when predicting the class of an input that they were trained to identify. Maintaining this accuracy in dynamic environments, where inputs frequently fall outside the fixed set of initially known classes, remains a challenge. We consider the problem of monitoring the classification decisions of neural networks in the presence of novel classes. For this purpose, we generalize our recently proposed abstraction-based monitor from binary output to real-valued quantitative output. This quantitative output enables new applications, two of which we investigate in the paper. As our first application, we introduce an algorithmic framework for active monitoring of a neural network, which allows us to learn new classes dynamically and yet maintain high monitoring performance. As our second application, we present an offline procedure to retrain the neural network to improve the monitor’s detection performance without deteriorating the network’s classification accuracy. Our experimental evaluation demonstrates both the benefits of our active monitoring framework in dynamic scenarios and the effectiveness of the retraining procedure.
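
    A simplified reading of an abstraction-based monitor with quantitative output might look like the sketch below: for every known class, a box (per-dimension interval) over hidden-layer activations is recorded, and at runtime the monitor reports the distance of a new activation vector to the box of the predicted class. All names and the particular distance measure are assumptions for illustration, not the paper's exact construction.

```python
# Simplified sketch of an abstraction-based monitor with quantitative output:
# for every known class we record a box (per-dimension min/max) over hidden-layer
# activation vectors seen during training; at runtime the monitor reports the
# distance from a new activation vector to the box of the predicted class, so
# 0.0 means "looks familiar" and larger values suggest a novel class.
import numpy as np

class BoxMonitor:
    def __init__(self):
        self.boxes = {}   # class label -> (lower bounds, upper bounds)

    def fit(self, activations, labels):
        for label in np.unique(labels):
            vecs = activations[labels == label]
            self.boxes[label] = (vecs.min(axis=0), vecs.max(axis=0))

    def score(self, activation, predicted_label):
        lo, hi = self.boxes[predicted_label]
        # per-dimension distance to the box, zero inside it
        outside = np.maximum(lo - activation, 0) + np.maximum(activation - hi, 0)
        return float(np.linalg.norm(outside))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(size=(100, 4))               # stand-in for hidden activations
    labels = np.array([0] * 50 + [1] * 50)
    monitor = BoxMonitor()
    monitor.fit(train, labels)
    print(monitor.score(train[0], 0))               # 0.0: inside its class box
    print(monitor.score(train[0] + 10.0, 0))        # large: likely a novel input
```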